Results 1 - 20 of 144,223
1.
J Biomed Opt ; 29(Suppl 2): S22702, 2025 Dec.
Article in English | MEDLINE | ID: mdl-38434231

ABSTRACT

Significance: Advancements in label-free microscopy could provide real-time, non-invasive imaging with unique sources of contrast and automated standardized analysis to characterize heterogeneous and dynamic biological processes. These tools would overcome challenges with widely used methods that are destructive (e.g., histology, flow cytometry) or lack cellular resolution (e.g., plate-based assays, whole animal bioluminescence imaging). Aim: This perspective aims to (1) justify the need for label-free microscopy to track heterogeneous cellular functions over time and space within unperturbed systems and (2) recommend improvements regarding instrumentation, image analysis, and image interpretation to address these needs. Approach: Three key research areas (cancer research, autoimmune disease, and tissue and cell engineering) are considered to support the need for label-free microscopy to characterize heterogeneity and dynamics within biological systems. Based on the strengths (e.g., multiple sources of molecular contrast, non-invasive monitoring) and weaknesses (e.g., imaging depth, image interpretation) of several label-free microscopy modalities, improvements for future imaging systems are recommended. Conclusion: Improvements in instrumentation including strategies that increase resolution and imaging speed, standardization and centralization of image analysis tools, and robust data validation and interpretation will expand the applications of label-free microscopy to study heterogeneous and dynamic biological systems.


Subject(s)
Histological Techniques , Microscopy , Animals , Flow Cytometry , Image Processing, Computer-Assisted
2.
Methods Mol Biol ; 2787: 3-38, 2024.
Article in English | MEDLINE | ID: mdl-38656479

ABSTRACT

In this chapter, we explore the application of high-throughput crop phenotyping facilities for phenotype data acquisition and the extraction of meaningful information from the collected data through image processing and data mining methods. Additionally, the construction and outlook of crop phenotype databases are introduced, and the need for global cooperation and data sharing is emphasized. High-throughput crop phenotyping significantly improves accuracy and efficiency compared with traditional measurements, helping to overcome bottlenecks in the phenotyping field and to advance crop genetics.


Subject(s)
Crops, Agricultural , Data Mining , Image Processing, Computer-Assisted , Phenotype , Crops, Agricultural/genetics , Crops, Agricultural/growth & development , Data Mining/methods , Image Processing, Computer-Assisted/methods , Data Management/methods , High-Throughput Screening Assays/methods
3.
Artif Intell Med ; 151: 102828, 2024 May.
Article in English | MEDLINE | ID: mdl-38564879

ABSTRACT

Reliable large-scale cell detection and segmentation is the fundamental first step to understanding biological processes in the brain. The ability to phenotype cells at scale can accelerate preclinical drug evaluation and system-level brain histology studies. The impressive advances in deep learning offer a practical solution to cell image detection and segmentation. Unfortunately, categorizing cells and delineating their boundaries for training deep networks is an expensive process that requires skilled biologists. This paper presents a novel self-supervised Dual-Loss Adaptive Masked Autoencoder (DAMA) for learning rich features from multiplexed immunofluorescence brain images. DAMA's objective function minimizes the conditional entropy in pixel-level reconstruction and feature-level regression. Unlike existing self-supervised learning methods based on a random image masking strategy, DAMA employs a novel adaptive mask sampling strategy to maximize mutual information and effectively learn brain cell data. To the best of our knowledge, this is the first effort to develop a self-supervised learning method for multiplexed immunofluorescence brain images. Our extensive experiments demonstrate that DAMA features enable superior cell detection, segmentation, and classification performance without requiring many annotations. In addition, to examine the generalizability of DAMA, we also experimented on TissueNet, a multiplexed imaging dataset comprised of two-channel fluorescence images from six distinct tissue types, captured using six different imaging platforms. Our code is publicly available at https://github.com/hula-ai/DAMA.
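As a rough sketch of the dual-loss idea only (the authors' actual implementation is in the linked repository), a masked-autoencoder objective combining pixel-level reconstruction on masked patches with feature-level regression could look like the following PyTorch snippet; the weights alpha and beta are hypothetical:

```python
import torch
import torch.nn.functional as F

def dual_masked_loss(pix_pred, pix_target, feat_pred, feat_target,
                     mask, alpha=1.0, beta=0.5):
    """Toy dual objective: per-patch pixel reconstruction restricted to
    masked positions, plus feature-level regression. alpha/beta are
    hypothetical weights, not taken from the paper."""
    # pix_* are (B, N, D) patch tensors; mask is (B, N) with 1 = masked patch
    pix_err = ((pix_pred - pix_target) ** 2).mean(dim=-1)        # per-patch MSE
    pix_loss = (pix_err * mask).sum() / mask.sum().clamp(min=1)  # masked mean
    feat_loss = F.smooth_l1_loss(feat_pred, feat_target)         # feature regression
    return alpha * pix_loss + beta * feat_loss
```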


Subject(s)
Brain , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Supervised Machine Learning , Humans , Deep Learning , Animals , Algorithms , Neuroimaging/methods
4.
Nat Commun ; 15(1): 3530, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664422

ABSTRACT

This paper explicates a solution to building correspondences between molecular-scale transcriptomics and tissue-scale atlases. This problem arises in atlas construction and cross-specimen/technology alignment where specimens per emerging technology remain sparse and conventional image representations cannot efficiently model the high dimensions from subcellular detection of thousands of genes. We address these challenges by representing spatial transcriptomics data as generalized functions encoding position and high-dimensional feature (gene, cell type) identity. We map onto low-dimensional atlas ontologies by modeling regions as homogeneous random fields with unknown transcriptomic feature distribution. We solve simultaneously for the minimizing geodesic diffeomorphism of coordinates through LDDMM and for these latent feature densities. We map tissue-scale mouse brain atlases to gene-based and cell-based transcriptomics data from MERFISH and BARseq technologies and to histopathology and cross-species atlases to illustrate integration of diverse molecular and cellular datasets into a single coordinate system as a means of comparison and further atlas construction.
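For orientation, the variational problem behind LDDMM-based atlas mapping has the following schematic form (our notation, not the paper's):

```latex
% Schematic LDDMM mapping objective (our notation): a time-dependent
% velocity field v_t generates the diffeomorphism \varphi through
% \dot{\varphi}_t = v_t(\varphi_t), \varphi_0 = \mathrm{id}, and one
% minimizes geodesic regularity plus a data-matching term between the
% deformed atlas measure \varphi_1 \cdot \mu (with latent feature
% densities \pi) and the observed transcriptomic measure \nu:
\min_{v,\,\pi} \; \int_0^1 \lVert v_t \rVert_V^2 \, dt
  \; + \; \lambda \, D\!\left( \varphi_1 \cdot \mu(\pi), \; \nu \right)
```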


Subject(s)
Atlases as Topic , Brain , Transcriptome , Animals , Brain/metabolism , Mice , Transcriptome/genetics , Image Processing, Computer-Assisted/methods , Gene Expression Profiling/methods , Humans
5.
Sci Rep ; 14(1): 9501, 2024 04 25.
Article in English | MEDLINE | ID: mdl-38664436

ABSTRACT

The use of various kinds of magnetic resonance imaging (MRI) techniques for examining brain tissue has increased significantly in recent years, and manual investigation of each of the resulting images can be a time-consuming task. This paper presents an automatic brain-tumor diagnosis system that uses a convolutional neural network (CNN) for detection, classification, and segmentation of glioblastomas; the latter stage seeks to segment tumors inside glioma MRI images. The structure of the developed multi-unit system consists of two stages. The first stage is responsible for tumor detection and classification by categorizing brain MRI images into normal, high-grade glioma (glioblastoma), and low-grade glioma. The uniqueness of the proposed network lies in its use of different levels of features, including local and global paths. The second stage is responsible for tumor segmentation, and skip connections and residual units are used during this step. Using 1800 images extracted from the BraTS 2017 dataset, the detection and classification stage was found to achieve a maximum accuracy of 99%. The segmentation stage was then evaluated using the Dice score, specificity, and sensitivity. The results showed that the suggested deep-learning-based system ranks highest among a variety of different strategies reported in the literature.
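The Dice score used to evaluate the segmentation stage is simple to compute; a minimal NumPy version for binary masks:

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom > 0 else 1.0
```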


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Neural Networks, Computer , Humans , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/pathology , Brain Neoplasms/diagnosis , Magnetic Resonance Imaging/methods , Deep Learning , Glioma/diagnostic imaging , Glioma/pathology , Glioma/diagnosis , Glioblastoma/diagnostic imaging , Glioblastoma/diagnosis , Glioblastoma/pathology , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Brain/pathology , Image Interpretation, Computer-Assisted/methods
6.
Sci Rep ; 14(1): 9481, 2024 04 25.
Article in English | MEDLINE | ID: mdl-38664466

ABSTRACT

In demersal trawl fisheries, the unavailability of catch information until the end of the catching process is a drawback, leading to seabed impacts and bycatch and reducing the economic performance of the fisheries. The emergence of in-trawl cameras to observe catches in real-time can provide such information. This data needs to be processed in real-time to determine catch compositions and rates, eventually improving the sustainability and economic performance of the fisheries. In this study, a real-time underwater video processing system counting the Nephrops individuals entering the trawl was developed using object detection and tracking methods on an edge device (NVIDIA Jetson AGX Orin). Seven state-of-the-art YOLO models were tested to discover the appropriate training settings and YOLO model. To achieve real-time processing and accurate counting simultaneously, four frame-skipping strategies were evaluated. It is shown that the adaptive frame-skipping approach, together with the YOLOv8s model, can increase the processing speed up to 97.47 FPS while achieving a correct count rate of 82.57% and an F-score of 0.86. In conclusion, this system can improve the sustainability of the Nephrops-directed trawl fishery by providing catch information in real-time.
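The paper's exact frame-skipping policies are not spelled out here, so the sketch below only illustrates the general adaptive idea: widen the skip interval while the scene is empty and narrow it when Nephrops are detected. The `detect` callable is a hypothetical stand-in for a YOLO-plus-tracker inference call:

```python
import cv2  # OpenCV for video decoding

def process_with_adaptive_skip(video_path, detect, min_skip=1, max_skip=8):
    """Illustrative adaptive frame skipping: when nothing is detected the
    skip interval doubles (up to max_skip); when individuals appear it snaps
    back to every frame. `detect(frame)` is assumed to return the number of
    individuals found; persistent track IDs (omitted here) are what prevent
    double counting in a real counting system."""
    cap = cv2.VideoCapture(video_path)
    skip, idx, processed = min_skip, 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % skip == 0:
            processed += 1
            n = detect(frame)
            skip = min_skip if n > 0 else min(skip * 2, max_skip)
        idx += 1
    cap.release()
    return processed  # frames actually run through the detector
```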


Subject(s)
Fisheries , Animals , Video Recording/methods , Fishes/physiology , Image Processing, Computer-Assisted/methods , Algorithms , Models, Theoretical
7.
Exp Dermatol ; 33(4): e15082, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38664884

ABSTRACT

As a chronic relapsing disease, psoriasis is characterized by widespread skin lesions. The Psoriasis Area and Severity Index (PASI) is the most frequently utilized tool for evaluating the severity of psoriasis in clinical practice. Nevertheless, long-term monitoring and precise evaluation are difficult for dermatologists and patients: scoring is time-consuming, subjective, and prone to evaluation bias. To develop a deep learning system with high accuracy and speed to assist PASI evaluation, we collected 2657 high-quality images from 1486 psoriasis patients, and the images were segmented and annotated. We then utilized the YOLO-v4 algorithm to establish the model via four modules and conducted a human-computer comparison using quadratic weighted kappa (QWK) coefficients and intra-class correlation coefficients (ICC). The YOLO-v4 algorithm was selected for model training and optimization after comparison with YOLOv3, RetinaNet, EfficientDet and Faster_rcnn. The model evaluation results of mean average precision (mAP) for various lesion features were as follows: erythema, mAP = 0.903; scale, mAP = 0.908; and induration, mAP = 0.882. In addition, the results of the human-computer comparison showed median consistency for skin lesion severity and excellent consistency for the area and PASI score. Finally, an intelligent PASI app was established for remote disease assessment and course management, with good agreement with dermatologists. Taken together, we propose an intelligent PASI app based on the YOLO-v4 image algorithm that can assist dermatologists in long-term and objective PASI scoring, shedding light on similar clinical assessments that computers can assist in a time-saving and objective manner.
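The quadratic weighted kappa used for the human-computer comparison can be reproduced with scikit-learn; the grade vectors below are invented for illustration:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical severity grades (0-4) for the same lesions from a
# dermatologist and from the model; quadratic weighting penalizes
# distant disagreements more heavily than adjacent-grade ones.
human = [2, 3, 1, 4, 0, 2, 3]
model = [2, 3, 2, 4, 0, 1, 3]
print(cohen_kappa_score(human, model, weights="quadratic"))
```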


Subject(s)
Algorithms , Deep Learning , Psoriasis , Severity of Illness Index , Psoriasis/pathology , Humans , Image Processing, Computer-Assisted/methods
8.
Biomed Phys Eng Express ; 10(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38588646

ABSTRACT

Objective. In current radiograph-based intra-fraction markerless target-tracking, digitally reconstructed radiographs (DRRs) from planning CTs (CT-DRRs) are often used to train deep learning models that extract information from the intra-fraction radiographs acquired during treatment. Traditional DRR algorithms were designed for patient alignment (i.e., bone matching) and may not replicate the radiographic image quality of intra-fraction radiographs at treatment. Hypothetically, generating DRRs from pre-treatment Cone-Beam CTs (CBCT-DRRs) with DRR algorithms incorporating physical modelling of on-board-imagers (OBIs) could improve the similarity between intra-fraction radiographs and DRRs by eliminating inter-fraction variation and reducing image-quality mismatches between radiographs and DRRs. In this study, we test the two hypotheses that intra-fraction radiographs are more similar to CBCT-DRRs than CT-DRRs, and that intra-fraction radiographs are more similar to DRRs from algorithms incorporating physical models of OBI components than DRRs from algorithms omitting these models. Approach. DRRs were generated from CBCT and CT image sets collected from 20 patients undergoing pancreas stereotactic body radiotherapy. CBCT-DRRs and CT-DRRs were generated replicating the treatment position of patients and the OBI geometry during intra-fraction radiograph acquisition. To investigate whether the modelling of physical OBI components influenced radiograph-DRR similarity, four DRR algorithms were applied for the generation of CBCT-DRRs and CT-DRRs, incorporating and omitting different combinations of OBI component models. The four DRR algorithms were: a traditional DRR algorithm, a DRR algorithm with source-spectrum modelling, a DRR algorithm with source-spectrum and detector modelling, and a DRR algorithm with source-spectrum, detector and patient material modelling. Similarity between radiographs and matched DRRs was quantified using Pearson's correlation and Czekanowski's index, calculated on a per-image basis. Distributions of correlations and indexes were compared to test each of the hypotheses. Distribution differences were determined to be statistically significant when Wilcoxon's signed rank test and the Kolmogorov-Smirnov two sample test returned p ≤ 0.05 for both tests. Main results. Intra-fraction radiographs were more similar to CBCT-DRRs than CT-DRRs for both metrics across all algorithms, with all p ≤ 0.007. Source-spectrum modelling improved radiograph-DRR similarity for both metrics, with all p < 10⁻⁶. OBI detector modelling and patient material modelling did not influence radiograph-DRR similarity for either metric. Significance. Generating DRRs from pre-treatment CBCTs is feasible, and incorporating CBCT-DRRs into markerless target-tracking methods may promote improved target-tracking accuracies. Incorporating source-spectrum modelling into a treatment planning system's DRR algorithms may reinforce the safe treatment of cancer patients by aiding in patient alignment.
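At its core, a traditional DRR is a set of Beer-Lambert line integrals through the CT (or CBCT) volume; the toy parallel-beam version below ignores the divergent OBI geometry and the source-spectrum, detector and material models compared in the study:

```python
import numpy as np

def toy_drr(mu_volume, axis=0, i0=1.0, voxel_mm=1.0):
    """Toy parallel-beam DRR via the Beer-Lambert law: the detector signal
    is I0 * exp(-∫ mu dl) along each ray. mu_volume holds linear attenuation
    coefficients in mm^-1. Real DRR algorithms trace divergent rays through
    the actual OBI geometry and may add spectrum/detector/material models."""
    line_integral = mu_volume.sum(axis=axis) * voxel_mm
    return i0 * np.exp(-line_integral)
```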


Subject(s)
Algorithms , Cone-Beam Computed Tomography , Pancreatic Neoplasms , Radiosurgery , Humans , Cone-Beam Computed Tomography/methods , Radiosurgery/methods , Pancreatic Neoplasms/radiotherapy , Pancreatic Neoplasms/diagnostic imaging , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Deep Learning , Tomography, X-Ray Computed/methods , Pancreas/diagnostic imaging , Pancreas/surgery , Phantoms, Imaging
9.
Biomed Phys Eng Express ; 10(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38588648

ABSTRACT

Objective. Ultrasound-assisted orthopaedic navigation holds promise due to its non-ionizing nature, portability, low cost, and real-time performance. To facilitate such applications, accurate and real-time bone surface segmentation is critical. Nevertheless, imaging artifacts and low signal-to-noise ratios in tomographical B-mode ultrasound (B-US) images create substantial challenges for bone surface detection. In this study, we present an end-to-end lightweight ultrasound bone segmentation network (UBS-Net) for bone surface detection. Approach. UBS-Net uses the U-Net structure as its base framework and a level-set loss function for improved sensitivity to bone surface detectability. A dual attention (DA) mechanism is introduced at the end of the encoder, which considers both position and channel information to capture the correlation between the position and channel dimensions of the feature map; axial attention (AA) replaces the traditional self-attention (SA) mechanism in the position attention module for better computational efficiency. The position attention and channel attention (CA) are combined with a two-class fusion module to form the DA map. The decoding module finally completes the bone surface detection. Main results. A frame rate of 21 frames per second (fps) was achieved in detection. The method outperformed the state-of-the-art approach with higher segmentation accuracy (Dice similarity coefficient: 88.76% versus 87.22%) when applied to retrospective ultrasound (US) data from 11 volunteers. Significance. The proposed UBS-Net achieves outstanding accuracy and real-time performance for bone surface detection in ultrasound, outperforming state-of-the-art methods, and has potential in US-guided orthopaedic surgery applications.


Subject(s)
Image Processing, Computer-Assisted , Signal-To-Noise Ratio , Ultrasonography , Humans , Ultrasonography/methods , Image Processing, Computer-Assisted/methods , Algorithms , Bone and Bones/diagnostic imaging , Neural Networks, Computer
10.
Phys Med Biol ; 69(10)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38593821

ABSTRACT

Objective. The textures and detailed structures in computed tomography (CT) images are highly desirable for clinical diagnosis. This study aims to expand the current body of work on texture- and detail-preserving convolutional neural networks for the low-dose CT (LDCT) image denoising task. Approach. This study proposed a novel multi-scale feature aggregation and fusion network (MFAF-net) for LDCT image denoising. Specifically, we proposed a multi-scale residual feature aggregation module to characterize multi-scale structural information in CT images, which captures region-specific inter-scale variations using learned weights. We further proposed a cross-level feature fusion module to integrate cross-level features, which adaptively weights the contributions of features from encoder to decoder by using a spatial pyramid attention mechanism. Moreover, we proposed a self-supervised multi-level perceptual loss module to generate multi-level auxiliary perceptual supervision for recovery of salient textures and structures of tissues and lesions in CT images, which takes advantage of abundant semantic information at various levels. We introduced parameters for the perceptual loss to adaptively weight the contributions of auxiliary features of different levels, and we also introduced an automatic parameter tuning strategy for these parameters. Main results. Extensive experimental studies were performed to validate the effectiveness of the proposed method. Experimental results demonstrate that the proposed method can achieve better performance on both fine texture preservation and noise suppression for the CT image denoising task compared with other competitive convolutional neural network (CNN) based methods. Significance. The proposed MFAF-net takes advantage of multi-scale receptive fields, cross-level feature integration and self-supervised multi-level perceptual loss, enabling more effective recovery of fine textures and detailed structures of tissues and lesions in CT images.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Humans , Neural Networks, Computer , Radiation Dosage , Signal-To-Noise Ratio
11.
Phys Med Biol ; 69(10)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38593827

ABSTRACT

Objective. To address the challenge of meningioma grading, this study aims to investigate the potential value of peritumoral edema (PTE) regions and proposes a unique approach that integrates radiomics and deep learning techniques. Approach. The primary focus is on developing a transfer learning-based meningioma feature extraction model (MFEM) that leverages both vision transformer (ViT) and convolutional neural network (CNN) architectures. Additionally, the study explores the significance of the PTE region in enhancing the grading process. Main results. The proposed method demonstrates excellent grading accuracy and robustness on a dataset of 98 meningioma patients. It achieves an accuracy of 92.86%, precision of 93.44%, sensitivity of 95%, and specificity of 89.47%. Significance. This study provides valuable insights into preoperative meningioma grading by introducing an innovative method that combines radiomics and deep learning techniques. The approach not only enhances accuracy but also reduces observer subjectivity, thereby contributing to improved clinical decision-making processes.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted , Meningioma , Neoplasm Grading , Meningioma/diagnostic imaging , Meningioma/pathology , Humans , Image Processing, Computer-Assisted/methods , Edema/diagnostic imaging , Meningeal Neoplasms/diagnostic imaging , Meningeal Neoplasms/pathology
12.
Physiol Meas ; 45(4)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38599227

ABSTRACT

Objective. In cardiovascular magnetic resonance imaging, synchronization of image acquisition with heart motion (called gating) is performed by detecting R-peaks in electrocardiogram (ECG) signals. Effective gating is challenging with 3T and 7T scanners, due to severe distortion of ECG signals caused by magnetohydrodynamic effects associated with intense magnetic fields. This work proposes an efficient retrospective gating strategy that requires no prior training outside the scanner and investigates the optimal number of leads in the ECG acquisition set. Approach. The proposed method was developed on a data set of 12-lead ECG signals acquired within 3T and 7T scanners. Independent component analysis is employed to effectively separate components related to cardiac activity from those associated with noise. Subsequently, an automatic selection process identifies the components best suited for accurate R-peak detection, based on heart rate estimation metrics and frequency content quality indexes. Main results. The proposed method is robust to different B0 field strengths, as evidenced by R-peak detection errors of 2.4 ± 3.1 ms and 10.6 ± 15.4 ms for data acquired with 3T and 7T scanners, respectively. Its effectiveness was verified with various subject orientations, showcasing applicability in diverse clinical scenarios. The work reveals that ECG leads can be limited in number to three, or at most five for 7T field strengths, without significant degradation in R-peak detection accuracy. Significance. The approach requires no preliminary ECG acquisition for R-peak detector training, reducing overall examination time. The gating process is designed to be adaptable, completely blind and independent of patient characteristics, allowing wide and rapid deployment in clinical practice. The potential to employ a significantly limited set of leads enhances patient comfort.
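A minimal sketch of the unmix-then-select idea, assuming a samples-by-leads ECG array; the component-selection score here is a simple RR-regularity stand-in for the paper's heart-rate and frequency-content quality indexes:

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.signal import find_peaks

def ica_r_peaks(ecg, fs, n_components=3):
    """Unmix multi-lead ECG (samples x leads) with ICA, then keep the
    component whose peak train is most regular and return its peaks.
    The regularity score below is an illustrative stand-in only."""
    sources = FastICA(n_components=n_components, random_state=0).fit_transform(ecg)
    best_peaks, best_score = None, np.inf
    for k in range(sources.shape[1]):
        s = np.abs(sources[:, k])
        peaks, _ = find_peaks(s, distance=int(0.4 * fs))  # refractory ~0.4 s
        if len(peaks) < 2:
            continue
        rr = np.diff(peaks) / fs
        score = np.std(rr) / np.mean(rr)   # coefficient of variation of RR
        if score < best_score:
            best_peaks, best_score = peaks, score
    return best_peaks
```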


Subject(s)
Electrocardiography , Heart , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Heart/diagnostic imaging , Heart/physiology , Image Processing, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Male , Adult , Heart Rate , Cardiac-Gated Imaging Techniques/methods , Female , Retrospective Studies
13.
Surg Innov ; 31(3): 291-306, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38619039

ABSTRACT

OBJECTIVE: To propose a transfer learning-based method of tumor segmentation in intraoperative fluorescence images, which will assist surgeons in efficiently and accurately identifying the boundary of tumors of interest. METHODS: We employed transfer learning and deep convolutional neural networks (DCNNs) for tumor segmentation. Specifically, we first pre-trained four networks on the ImageNet dataset to extract low-level features. Subsequently, we fine-tuned these networks on two fluorescence image datasets (ABFM and DTHP) separately to enhance the segmentation performance on fluorescence images. Finally, we tested the trained models on the DTHL dataset. The performance of this approach was compared and evaluated against DCNNs trained end-to-end and the traditional level-set method. RESULTS: The transfer learning-based UNet++ model achieved segmentation accuracies of 82.17% on the ABFM dataset, 95.61% on the DTHP dataset, and 85.49% on the DTHL test set. For the DTHP dataset, the pre-trained DeepLab v3+ network performed exceptionally well, with a segmentation accuracy of 96.48%. Furthermore, all models achieved segmentation accuracies of over 90% when dealing with the DTHP dataset. CONCLUSION: To the best of our knowledge, this study explores tumor segmentation on intraoperative fluorescence images for the first time. The results show that, compared to traditional methods, deep learning has significant advantages in improving segmentation performance. Transfer learning enables deep learning models to perform better on small-sample fluorescence image data than end-to-end training. This finding provides strong support for surgeons seeking more reliable and accurate image segmentation results during surgery.
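The pretrain-then-fine-tune recipe can be sketched in a few lines of PyTorch. The study fine-tuned UNet++ and DeepLab v3+, among others; torchvision ships DeepLab v3, which is close enough to show the pattern, and the freezing strategy and learning rate below are illustrative rather than the authors' settings:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Start from a pretrained network, swap the head for binary
# tumor/background segmentation, freeze the encoder, fine-tune.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = torch.nn.Conv2d(256, 2, kernel_size=1)  # 2 classes

for p in model.backbone.parameters():   # keep pretrained low-level features
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
```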


Subject(s)
Neural Networks, Computer , Optical Imaging , Humans , Optical Imaging/methods , Neoplasms/surgery , Neoplasms/diagnostic imaging , Deep Learning , Image Processing, Computer-Assisted/methods , Surgery, Computer-Assisted/methods
14.
Biomed Phys Eng Express ; 10(3)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38631317

ABSTRACT

Introduction. Currently available dosimetry techniques in computed tomography can be inaccurate and overestimate the absorbed dose. We therefore aimed to provide an automated and fast methodology to more accurately calculate the size-specific dose estimate (SSDE) using the water-equivalent diameter (Dw) obtained by a convolutional neural network (CNN) from thorax and abdominal CT study images. Methods. The SSDE was determined from 200 patient records. For that purpose, patient size was measured in two ways: (a) by an algorithm developed following the AAPM Report No. 204 methodology; and (b) by a CNN according to AAPM Report No. 220. Results. Patient size measured by the in-house software in the thorax and abdomen regions was 27.63 ± 3.23 cm and 28.66 ± 3.37 cm, while the CNN measured 18.90 ± 2.6 cm and 21.77 ± 2.45 cm, respectively. The SSDE in the thorax according to Reports 204 and 220 was 17.26 ± 2.81 mGy and 23.70 ± 2.96 mGy for women and 17.08 ± 2.09 mGy and 23.47 ± 2.34 mGy for men. In the abdomen it was 18.54 ± 2.25 mGy and 23.40 ± 1.88 mGy in women and 18.37 ± 2.31 mGy and 23.84 ± 2.36 mGy in men. Conclusions. Implementing CNN-based automated methodologies can contribute to fast and accurate dose calculations, thereby improving patient-specific radiation safety in clinical practice.
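For reference, the SSDE calculation itself is a one-line conversion once Dw is known; the sketch below uses the widely cited AAPM Report 204 fit constants for the 32 cm phantom, which should be verified against the report:

```python
import numpy as np

def ssde_mGy(ctdi_vol_mGy, dw_cm):
    """SSDE = f(Dw) * CTDIvol, with the 32 cm phantom conversion factor
    f = a * exp(-b * Dw) from AAPM Report 204 (applied to the
    water-equivalent diameter Dw per Report 220)."""
    a, b = 3.704369, 0.03671937
    return ctdi_vol_mGy * a * np.exp(-b * np.asarray(dw_cm))

# e.g., CTDIvol = 10 mGy at Dw = 21.8 cm (close to the CNN abdominal
# mean above) gives an SSDE of roughly 16.6 mGy
print(ssde_mGy(10.0, 21.8))
```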


Subject(s)
Algorithms , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Male , Female , Body Size , Neural Networks, Computer , Software , Automation , Thorax/diagnostic imaging , Adult , Abdomen/diagnostic imaging , Radiometry/methods , Radiography, Thoracic/methods , Middle Aged , Image Processing, Computer-Assisted/methods , Radiography, Abdominal/methods , Aged
15.
Elife ; 12, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38634855

ABSTRACT

Despite much progress, image processing remains a significant bottleneck for high-throughput analysis of microscopy data. One popular platform for single-cell time-lapse imaging is the mother machine, which enables long-term tracking of microbial cells under precisely controlled growth conditions. While several mother machine image analysis pipelines have been developed in the past several years, adoption by a non-expert audience remains a challenge. To fill this gap, we implemented our own software, MM3, as a plugin for the multidimensional image viewer napari. napari-MM3 is a complete and modular image analysis pipeline for mother machine data, which takes advantage of the high-level interactivity of napari. Here, we give an overview of napari-MM3 and test it against several well-designed and widely used image analysis pipelines, including BACMMAN and DeLTA. Researchers often analyze mother machine data with custom scripts using varied image analysis methods, but a quantitative comparison of the output of different pipelines has been lacking. To this end, we show that key single-cell physiological parameter correlations and distributions are robust to the choice of analysis method. However, we also find that small changes in thresholding parameters can systematically alter parameters extracted from single-cell imaging experiments. Moreover, we explicitly show that in deep learning-based segmentation, 'what you put is what you get' (WYPIWYG) - that is, pixel-level variation in training data for cell segmentation can propagate to the model output and bias spatial and temporal measurements. Finally, while the primary purpose of this work is to introduce the image analysis software that we have developed over the last decade in our lab, we also provide information for those who want to implement mother machine-based high-throughput imaging and analysis methods in their research.


Subject(s)
Image Processing, Computer-Assisted , Mothers , Female , Humans , Microscopy , Culture , Research Personnel
16.
Sci Rep ; 14(1): 8924, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38637613

ABSTRACT

Accurate measurement of abdominal aortic aneurysms is essential for selecting suitable stent-grafts to avoid complications of endovascular aneurysm repair. However, conventional image-based measurements are inaccurate and time-consuming. We introduce an automated workflow including semantic segmentation with active learning (AL) and measurement using an application programming interface of computer-aided design software. 300 patients underwent CT scans, and semantic segmentation of the aorta, thrombus, calcification, and vessels was performed on 60-300 cases with AL across five stages using UNETR, SwinUNETR, and nnU-Net (comprising 2D U-Net, 3D U-Net, a 2D-3D U-Net ensemble, and cascaded 3D U-Net). Seven clinical landmarks were automatically measured for 96 patients. In AL stage 5, 3D U-Net achieved the highest Dice similarity coefficient (DSC), with statistically significant differences (p < 0.01) except against the 2D-3D U-Net ensemble and cascaded 3D U-Net. SwinUNETR excelled in 95% Hausdorff distance (HD95), with significant differences (p < 0.01) except against UNETR and 3D U-Net. The DSC of the aorta and calcification saturated at stages 1 and 4, respectively, whereas thrombus and vessels continued to improve through stage 5. Compared with manual segmentation, AL-corrected segmentation using the best model (3D U-Net) reduced segmentation time to 9.51 ± 1.02, 2.09 ± 1.06, 1.07 ± 1.10, and 1.07 ± 0.97 min for the aorta, thrombus, calcification, and vessels, respectively (p < 0.001). The automated measurements deviated from manual ones by -1.71 ± 6.53 mm overall, and the tortuosity ratio by -0.15 ± 0.25. We developed an automated workflow with semantic segmentation and measurement, demonstrating its efficiency compared to conventional methods.
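HD95, one of the two segmentation metrics above, can be computed from boundary point sets with SciPy; a minimal sketch:

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a, points_b):
    """95% Hausdorff distance between two boundary point sets (N x 3 arrays
    of surface coordinates in mm): the worst of the two directed
    95th-percentile nearest-neighbour distances."""
    d_ab = cKDTree(points_b).query(points_a)[0]   # A -> nearest point in B
    d_ba = cKDTree(points_a).query(points_b)[0]   # B -> nearest point in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```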


Subject(s)
Aortic Aneurysm, Abdominal , Blood Vessel Prosthesis Implantation , Calcinosis , Endovascular Procedures , Thrombosis , Humans , Aortic Aneurysm, Abdominal/diagnostic imaging , Problem-Based Learning , Semantics , Tomography, X-Ray Computed , Image Processing, Computer-Assisted
17.
BMC Oral Health ; 24(1): 490, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658959

ABSTRACT

BACKGROUND: A deep learning model trained on a large image dataset can be used to detect and discriminate targets with similar but not identical appearances. The aim of this study is to evaluate the post-training performance of the CNN-based YOLOv5x algorithm in the detection of white spot lesions in post-orthodontic oral photographs using the limited data available, and to provide a preliminary study toward fully automated models that can be clinically integrated in the future. METHODS: A total of 435 images in JPG format were uploaded into the CranioCatch labeling software, and white spot lesions were labeled. The labeled images were resized to 640 × 320 while maintaining their aspect ratio before model training, then randomly divided into three groups (training: 349 images (1589 labels); validation: 43 images (181 labels); test: 43 images (215 labels)). The YOLOv5x algorithm was used to perform deep learning. The segmentation performance of the tested model was visualized and analyzed using ROC analysis and a confusion matrix, and True Positive (TP), False Positive (FP), and False Negative (FN) values were determined. RESULTS: Among the test group images, there were 133 TPs, 36 FPs, and 82 FNs. The model's precision, recall, and F1 score for detecting white spot lesions were 0.786, 0.618, and 0.692, respectively. The AUC value obtained from the ROC analysis was 0.712, and the mAP value obtained from the precision-recall curve was 0.425. CONCLUSIONS: The model's accuracy and sensitivity in detecting white spot lesions remained lower than expected for practical application, but represent a promising and acceptable detection rate compared to previous studies. The current study provides preliminary insight that can be further improved by increasing the training dataset and applying modifications to the deep learning algorithm. CLINICAL RELEVANCE: Deep learning systems can help clinicians distinguish white spot lesions that may be missed during visual inspection.
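The reported precision, recall, and F1 follow directly from the stated confusion-matrix counts, as this quick check shows (agreeing with the reported 0.786/0.618/0.692 up to rounding):

```python
TP, FP, FN = 133, 36, 82                # test-set counts reported above
precision = TP / (TP + FP)              # ≈ 0.787
recall = TP / (TP + FN)                 # ≈ 0.619
f1 = 2 * precision * recall / (precision + recall)  # ≈ 0.693
print(round(precision, 3), round(recall, 3), round(f1, 3))
```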


Subject(s)
Algorithms , Deep Learning , Humans , Pilot Projects , Photography, Dental/methods , Image Processing, Computer-Assisted/methods , White
18.
J Biomed Opt ; 29(4): 046008, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38659998

ABSTRACT

Significance: Optical imaging is a non-invasive imaging technology that utilizes near-infrared light and allows for the reconstruction of optical properties, such as diffusion and absorption coefficients, within tissue. A recent trend is to use signal processing techniques or new light sources to expand its applications. Aim: We aim to develop reflective optical imaging based on chaotic correlation technology with a chaotic laser, and to optimize the quality and spatial resolution of the resulting images. Approach: A scattering medium was measured in a reflective configuration with different inhomogeneous regions to evaluate the performance of the imaging system. The accuracy of the recovered optical properties was investigated, the reconstruction errors of absorption coefficients and geometric centers were analyzed, and the feature metrics of the reconstructed images were evaluated. Results: We showed how chaotic correlation technology can be utilized for information extraction and image reconstruction: a higher signal-to-noise ratio and successful image reconstruction of inhomogeneous phantoms under different scenarios were achieved. Conclusions: This work highlights that the correlation peaks of the chaotic laser yield smaller reconstruction errors and better reconstruction performance compared with reflective optical imaging using a continuous-wave laser.


Subject(s)
Image Processing, Computer-Assisted , Lasers , Optical Imaging , Phantoms, Imaging , Scattering, Radiation , Optical Imaging/methods , Image Processing, Computer-Assisted/methods , Signal-To-Noise Ratio , Nonlinear Dynamics , Algorithms , Equipment Design
19.
Technol Cancer Res Treat ; 23: 15330338241245943, 2024.
Article in English | MEDLINE | ID: mdl-38660703

ABSTRACT

BACKGROUND: Hepatocellular carcinoma (HCC) is a serious health concern because of its high morbidity and mortality. The prognosis of HCC largely depends on the disease stage at diagnosis. Computed tomography (CT) image textural analysis is an image analysis technique that has emerged in recent years. OBJECTIVE: To probe the feasibility of a CT radiomic model for predicting early (stages 0, A) and intermediate (stage B) HCC according to Barcelona Clinic Liver Cancer (BCLC) staging. METHODS: A total of 190 patients with stage 0, A, or B HCC were retrospectively assessed using contrast-enhanced arterial- and portal-vein-phase CT images. The lesions were delineated manually to construct a region of interest (ROI) consisting of the entire tumor mass, and the textural profiles of the ROIs were extracted by dedicated software. Least absolute shrinkage and selection operator (LASSO) dimensionality reduction was used to screen the textural profiles and obtain area under the receiver operating characteristic curve (AUC) values. RESULTS: Within the test cohort, the AUC values associated with arterial-phase images and BCLC stages 0, A, and B disease were 0.99, 0.98, and 0.99, respectively, with an overall accuracy of 92.7%. The AUC values associated with portal-vein-phase images and BCLC stages 0, A, and B disease were 0.98, 0.95, and 0.99, respectively, with an overall accuracy of 90.9%. CONCLUSION: The CT radiomic model can be used to predict the BCLC stage of early-stage and intermediate-stage HCC.
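A generic LASSO-based radiomics screening step of the kind described here might be sketched as follows with scikit-learn; this is an illustration of the technique, not the study's pipeline:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score

def lasso_signature_auc(X_train, y_train, X_test, y_test):
    """LASSO shrinks most texture-feature coefficients to zero (the
    dimensionality-reduction step); the surviving features form a linear
    signature scored with ROC AUC. Shown one-vs-rest for a single stage;
    the study reports AUCs per BCLC stage."""
    lasso = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
    kept = np.flatnonzero(lasso.coef_)      # indices of retained features
    scores = lasso.predict(X_test)          # continuous signature score
    return kept, roc_auc_score(y_test, scores)
```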


Subject(s)
Carcinoma, Hepatocellular , Feasibility Studies , Liver Neoplasms , Neoplasm Staging , ROC Curve , Tomography, X-Ray Computed , Humans , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Male , Tomography, X-Ray Computed/methods , Female , Middle Aged , Aged , Retrospective Studies , Prognosis , Adult , Image Processing, Computer-Assisted/methods , Area Under Curve
20.
Sci Rep ; 14(1): 9245, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38649692

ABSTRACT

Radiological imaging to examine intracranial blood vessels is critical for preoperative planning and postoperative follow-up. Automated segmentation of cerebrovascular anatomy from Time-Of-Flight Magnetic Resonance Angiography (TOF-MRA) can provide radiologists with a more detailed and precise view of these vessels. This paper introduces a domain generalized artificial intelligence (AI) solution for volumetric monitoring of cerebrovascular structures from multi-center MRAs. Our approach utilizes a multi-task deep convolutional neural network (CNN) with a topology-aware loss function to learn voxel-wise segmentation of the cerebrovascular tree. We use Decorrelation Loss to achieve domain regularization for the encoder network and auxiliary tasks to provide additional regularization and enable the encoder to learn higher-level intermediate representations for improved performance. We compare our method to six state-of-the-art 3D vessel segmentation methods using retrospective TOF-MRA datasets from multiple private and public data sources scanned at six hospitals, with and without vascular pathologies. The proposed model achieved the best scores in all the qualitative performance measures. Furthermore, we have developed an AI-assisted Graphical User Interface (GUI) based on our research to assist radiologists in their daily work and establish a more efficient work process that saves time.


Subject(s)
Magnetic Resonance Angiography , Neural Networks, Computer , Workflow , Humans , Magnetic Resonance Angiography/methods , Artificial Intelligence , Retrospective Studies , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods